corres[w83,jmc]		The correspondence theory is right for robots

	Epistemology for robots
	Artificial intelligence and epistemology
	Artificial intelligence and the theory of truth

	Science has influenced philosophy before, and research in making
machines behave intelligently raises issues for philosophy, especially
for epistemology.  It takes sides on some traditional philosophical
issues, and it makes salient some problems that philosophers generally
claim are within the intellectual jurisdiction of philosophy but which
don't seem to have interested any philosopher in particular.

	In designing a computer program with general
common sense, we must build into it some general view of the world.
Everyone involved in such efforts seems to believe that machines
can and maybe even do think, but logically this isn't required.
Even if one believes philosophical arguments that no machine can
really think and that all thinking is done by the designer, we still
have to advise the designer on what he should put into the machine
so that it will have the desired behavior.

	To fix ideas, imagine that we are designing a robot as
a domestic servant.  Suppose it to be approximately the same size
and shape as a human and to have the same number of limbs.  However,
we are not trying for an accurate imitation.  It can have electric
motors to run its joints and TV cameras for eyes, etc.  Let it be
controlled by a computer of the conventional sort, which may be
built into the robot or may control it remotely, according to the
convenience of IBM or whatever company offers it for sale.
Besides its physical work, we want it to communicate.  It must
keep track of what household supplies it uses and buy more when
required.  It must ask its owners what they want it to cook,
consult databanks for recipes, and transfer money in order to pay
the bills incurred by its owner as a consequence of its activities.
It must remember engagements and other commitments for its owner
and warn him when its program determines that he has made
conflicting commitments.  It must sometimes produce output that
makes people believe that its joints and other organs will operate
in such a way as to produce a given result.  For purely mnemonic
reasons, we may call this pseudo-making pseudo-promises.

	In the previous paragraph I have tried to avoid language that
suggests that the robot genuinely believes or wants.  I may have been
unsuccessful, because I believe it is appropriate to apply such terms to
machines as simple as thermostats.  See (McCarthy 1980).  If the reader is
finicky, let him further expurgate the preceding paragraph of terms that
he thinks are appropriate only for describing humans.  Of course, the reader
may not believe that a robot can ever be built that its owner will find
useful, but many philosophical objectors to ascribing mental qualities
to machines grant the performance but balk at using mental terminology.
If the reader balks at servant-like performance (and it is part of the
thesis of this paper that servant-like performance won't be achieved
without more progress in epistemology), then let him consider much
simpler systems.  Indeed most of this paper deals with data structures
and programs required for quite simple systems.

	It is becoming increasingly prevalent in artificial intelligence
research to use languages of mathematical logic, especially first order
theories, for expressing in the memory of a computer its data about
its environment, the goals it has been asked to achieve and subgoals
it has generated from them, and the general rules that give the effects
of the various actions that it may perform.  The programs themselves
take actions on the basis of the sentences in the program's memory and
put more sentences in memory based on inputs from the sensory devices
connected to the computer.  Often sentences are not the only data;
arrays of numbers represent more compactly the brightness and color
of the points in the field of view of a television camera.  However,
the programs that process visual data often produce as their output
sentences in suitable first order languages.  The processes that
produce new sentences from old ones in the memory sometimes correspond
to deduction in first order logic, but often they don't.  Sometimes
this is because the AI researchers aren't familiar enough with the
value of logical inference for expressing reasoning, but it is clear
that some kinds of non-monotonic inference that really don't correspond
to deduction are required for successful action.  (McCarthy 1980)
introduces one mode of non-monotonic reasoning, and there are others.
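
	To make the non-monotonic point concrete, here is a minimal
sketch, in present-day Python rather than any system of 1983, of a
sentence memory with one default rule.  The predicates bird, penguin
and flies and the rule itself are invented for illustration; the
sketch is not the circumscription of (McCarthy 1980), only an example
of a conclusion that adding a fact can retract.

    memory = {
        ("bird", "Tweety"),     # facts, e.g. reports from sensory programs
        ("bird", "Joe"),
        ("penguin", "Joe"),
    }

    def abnormal(x):
        # A bird is abnormal (with respect to flying) if it is known
        # to be a penguin.
        return ("penguin", x) in memory

    def conclude_flies():
        # Default rule: a bird flies unless it can be shown abnormal.
        # This is not first order deduction: later asserting
        # ("penguin", "Tweety") would invalidate ("flies", "Tweety"),
        # and monotonic deduction never withdraws a conclusion.
        for (pred, x) in list(memory):
            if pred == "bird" and not abnormal(x):
                memory.add(("flies", x))

    conclude_flies()
    print(("flies", "Tweety") in memory)   # True
    print(("flies", "Joe") in memory)      # False; Joe is abnormal

Had ("penguin", "Tweety") been in memory before the rule ran, the
conclusion about Tweety would never have been drawn; no deductive
system loses a conclusion when a premise is added.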

	Besides the view of the world we give the robot, we must also
have our own view of the internal state of the robot.  The following
kinds of questions arise.

	1. What does the robot know?  Or pseudo-know.  If the reader
prefers "pseudo-know", what properties does pseudoknowledge have?
Should true sentences in the data base area marked BELIEFS be
regarded as pseudo-known.
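
	As a toy illustration of the last question, consider a data
base with an area marked BELIEFS and the candidate reading on which
pseudo-knowledge is true belief.  The sentences, the stand-in world,
and the names below are invented for this sketch; nothing in it
settles what properties pseudo-knowledge ought to have.

    # An area of the robot's data base marked BELIEFS.
    beliefs = {
        "on(coffee, shelf)",      # true in the toy world below
        "quantity(flour) > 0",    # believed, but false in the toy world
    }

    # A stand-in for how things really are.
    world = {"on(coffee, shelf)", "on(cat, mat)"}

    def pseudo_known(sentence):
        # One candidate reading: pseudo-knowledge is true belief.
        return sentence in beliefs and sentence in world

    print(pseudo_known("on(coffee, shelf)"))     # True
    print(pseudo_known("quantity(flour) > 0"))   # False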

*****

Truth:

	The light AI casts on traditional philosophical issues
is illustrated by the question of truth.

	The information we must give our robot or program it to obtain
from the world is determined by the tasks it must perform.  Here are some
tasks and related information:

	1.

Remarks:

1. Usually when I advance these views, they are attacked on behalf of some
currently held philosophical position.  However, no alternative proposals
are advanced about what information should be put in the robot's database,
how it should be expressed, and what we should consider it to mean.  It
is agreed that the problems are within the jurisdiction of the
Amalgamated Philosophers and Linguists, and the subject is changed.